The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training. Binary neural networks are known to be promising candidates for on-device inference due to their extreme compute and memory savings over higher-precision alternatives. However, their existing training methods require the concurrent storage of high-precision activations for all layers, generally making learning on memory-constrained devices infeasible. In this paper, we demonstrate that the backward propagation operations needed for binary neural network training are strongly robust to quantization, thereby making on-the-edge learning with modern models a practical proposition. We introduce a low-cost binary neural network training strategy exhibiting sizable memory footprint reductions while inducing little to no accuracy loss versus Courbariaux & Bengio's standard approach. These reductions are primarily enabled by retaining activations exclusively in binary format. Against the latter algorithm, our drop-in replacement sees memory requirement reductions of 3--5$\times$, while reaching similar test accuracy in comparable time, across a range of small-scale models trained to classify popular datasets. We also demonstrate from-scratch ImageNet training of a binarized ResNet-18, achieving a 3.78$\times$ memory reduction. Our work is open-source, and includes the Raspberry Pi-targeted prototype we used to verify the modeled memory reductions and capture the associated energy drops. Such savings will allow unnecessary cloud offloading to be avoided, reducing latency, increasing energy efficiency, and safeguarding end-user privacy.
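The core claim, that the backward pass tolerates binary storage, can be made concrete with a toy PyTorch sketch (ours, not the authors' released code): a binary linear layer whose autograd function saves only 1-bit tensors for backward, so no high-precision activations need to be kept. The clipping mask of the standard straight-through estimator is omitted for brevity.

```python
import torch

class BinaryLinear(torch.autograd.Function):
    """Linear layer that stores only 1-bit activations for its backward pass."""

    @staticmethod
    def forward(ctx, x, weight):
        x_bin = torch.sign(x)        # 1-bit activation
        w_bin = torch.sign(weight)   # 1-bit weight
        # Only binary tensors are retained, cutting activation memory.
        ctx.save_for_backward(x_bin, w_bin)
        return x_bin @ w_bin.t()

    @staticmethod
    def backward(ctx, grad_out):
        x_bin, w_bin = ctx.saved_tensors
        grad_x = grad_out @ w_bin        # STE: treat sign() as identity
        grad_w = grad_out.t() @ x_bin    # weight gradient needs only x_bin
        return grad_x, grad_w
```

Note that the weight gradient is a product of the output gradient with the *binarized* input, which is why keeping activations in binary format suffices here.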
Considering computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for achieving lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost by not requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transfer, we propose a one-to-one self-teaching (OST) module to give the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location, helping the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. Its tiny parameter size (<9.7 MB) and bit-operations (BOPs) (<2158 G), compared with any remote sensing-based, lightweight, or distillation-based algorithm, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
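As a rough illustration of the GQSD idea, a quantized student guided by its own full-precision model, the sketch below uses a generic distillation objective; the temperature, weighting, and the paper's exact loss terms are assumptions.

```python
import torch.nn.functional as F

def gqsd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic quantization-aware distillation objective: the quantized
    student matches both the ground truth and the softened predictions
    of its full-precision counterpart."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients softened by the temperature
    return alpha * soft + (1 - alpha) * hard
```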
Deep learning classifiers provide the most accurate means of automatically diagnosing diabetic retinopathy (DR) based on optical coherence tomography (OCT) and its angiography (OCTA). The power of these models is attributable in part to the inclusion of hidden layers that provide the complexity required to achieve a desired task. However, hidden layers also render algorithm outputs difficult to interpret. Here we introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning that allows clinicians to verify and understand classifiers' decision-making. A data set including 456 macular scans was graded as non-referable or referable DR based on current clinical standards. The DR classifier used to evaluate our BAM was first trained on this data set. The BAM generation framework was designed by combining two U-shaped generators to provide meaningful interpretability for this classifier. The main generator was trained to take referable scans as input and produce an output that would be classified by the classifier as non-referable. The BAM is then constructed as the difference image between the output and input of the main generator. To ensure that the BAM only highlights classifier-utilized biomarkers, an assistant generator was trained to do the opposite: produce scans that would be classified as referable by the classifier from non-referable scans. The generated BAMs highlighted known pathologic features, including nonperfusion areas and retinal fluid. A fully interpretable classifier based on these highlights could help clinicians better utilize and verify automated DR diagnosis.
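The BAM construction itself is simple to state in code. A minimal sketch, assuming a trained main generator; the function and variable names are illustrative, not from the paper's code:

```python
import torch

@torch.no_grad()
def biomarker_activation_map(main_generator, scan):
    """The BAM is the difference between the main generator's
    'de-referred' output and its referable input scan."""
    converted = main_generator(scan)   # output classified as non-referable
    bam = (scan - converted).abs()     # highlights the removed biomarkers
    return bam
```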
Hashing has been widely researched to solve the large-scale approximate nearest neighbor search problem owing to its superiority in time and storage. In recent years, a number of online hashing methods have emerged, which can update the hash functions to adapt to new stream data and realize dynamic retrieval. However, existing online hashing methods must update the whole database with the latest hash functions when a query arrives, which leads to low retrieval efficiency as the stream data continuously grows. On the other hand, these methods ignore the supervision relationship among the examples, especially in the multi-label case. In this paper, we propose a novel Fast Online Hashing (FOH) method which only updates the binary codes of a small part of the database. Specifically, we first build a query pool in which the nearest neighbors of each central point are recorded. When a new query arrives, only the binary codes of the corresponding potential neighbors are updated. In addition, we create a similarity matrix which takes the multi-label supervision information into account and introduce a multi-label projection loss to further preserve the similarity among the multi-label data. Experimental results on two common benchmarks show that the proposed FOH achieves dramatic superiority in query time, up to 6.28 seconds less than state-of-the-art baselines, with competitive retrieval accuracy.
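A minimal NumPy sketch of the partial re-hashing step, under our own assumptions about the data layout (a precomputed query pool mapping each central point to candidate ids, and the latest hash functions passed in as `hash_fn`):

```python
import numpy as np

def selective_update(query, centers, query_pool, codes, data, hash_fn):
    """When a query arrives, re-code only the potential neighbors recorded
    for its nearest central point, rather than the whole database."""
    nearest = int(np.argmin(((centers - query) ** 2).sum(axis=1)))
    candidates = query_pool[nearest]               # ids recorded offline
    codes[candidates] = hash_fn(data[candidates])  # re-hash this subset only
    return candidates
```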
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes the MuSe-Humor, MuSe-Reaction, and MuSe-Stress sub-challenges. MuSe 2022 focuses on humor detection, emotional reactions, and multimodal emotional stress, utilizing different modalities and datasets. In our work, different kinds of multimodal features are extracted, including acoustic, visual, textual, and biological features. These features are fused by TEMMA and GRU within a self-attention framework. In this paper, 1) several new audio features, facial expression features, and paragraph-level text embeddings are extracted for accuracy improvement; 2) we substantially improve the accuracy and reliability of multimodal emotion prediction by mining and fusing multimodal features; 3) effective data augmentation strategies are applied in model training to alleviate the sample imbalance problem and prevent the model from learning biased subject characteristics. For the MuSe-Humor sub-challenge, our model obtains an AUC score of 0.8932. For the MuSe-Reaction sub-challenge, the Pearson correlation coefficient of our model on the test set is 0.3879, outperforming all other participants. For the MuSe-Stress sub-challenge, our approach outperforms the baseline on both arousal and valence on the test dataset, reaching a final combined result of 0.5151.
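As a rough sketch of the fusion stage, assuming per-frame features for each modality are already extracted and time-aligned; TEMMA itself is not reproduced here, only a GRU-plus-self-attention stand-in with illustrative names:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate per-modality features, pass them through a GRU,
    and refine with self-attention before regression."""

    def __init__(self, feat_dim, hidden=256, heads=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, acoustic, visual, text, bio):
        # Each input: (batch, time, modality_dim); feat_dim is their sum.
        x = torch.cat([acoustic, visual, text, bio], dim=-1)
        h, _ = self.gru(x)
        h, _ = self.attn(h, h, h)
        return self.head(h.mean(dim=1))
```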
Time series forecasting plays an important role in many real-world scenarios, such as equipment life-cycle prediction, weather forecasting, and traffic flow forecasting. Recent research shows that a variety of Transformer-based models achieve remarkable results on time series forecasting. However, some issues still limit the ability of Transformer-based models on time series forecasting tasks: (i) learning directly on raw data is susceptible to noise because of its complex and unstable feature representation; (ii) the self-attention mechanism pays insufficient attention to changing features and temporal dependencies. To address these two problems, we propose a Transformer-based differentially reconstructed attention model, Draformer. Specifically, Draformer has the following innovations: (i) learning on the differenced sequence, which preserves clear and stable sequence features through differencing and highlights the changing properties of the sequence; (ii) reconstructed attention: integrated distance attention expresses sequential distance through a learnable Gaussian kernel, distributed difference attention computes distribution differences by mapping the differenced sequence to an adaptive feature space, and their combination effectively focuses on sequences with prominent associations; (iii) reconstructed decoder input, which extracts sequence features by integrating variation information and temporal correlation, thereby obtaining a more comprehensive sequence representation. Extensive experiments on four large-scale datasets demonstrate that Draformer outperforms state-of-the-art baselines.
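A loose sketch of the first two ideas, differencing plus a Gaussian-kernel distance bias on attention scores; the paper's exact formulation of the reconstructed attention certainly differs in detail, and all names below are ours:

```python
import torch
import torch.nn as nn

class DistanceAttention(nn.Module):
    """Dot-product attention over the differenced series, biased by a
    learnable Gaussian kernel of sequence distance."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.sigma = nn.Parameter(torch.ones(1))  # learnable kernel width

    def forward(self, x):
        x = x[:, 1:] - x[:, :-1]                  # first-order differencing
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        t = torch.arange(x.shape[1], device=x.device, dtype=x.dtype)
        dist = (t[None, :] - t[:, None]) ** 2
        scores = scores - dist / (2 * self.sigma ** 2)  # Gaussian distance bias
        return torch.softmax(scores, dim=-1) @ v
```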
Chemical reaction prediction, involving forward synthesis and retrosynthesis prediction, is a fundamental problem in organic synthesis. A popular computational paradigm formulates synthesis prediction as a sequence-to-sequence translation problem, in which the typical SMILES is adopted for molecule representation. However, general-purpose SMILES neglects the characteristics of chemical reactions, where the molecular graph topology is largely unchanged from reactants to products, leading to suboptimal performance if SMILES is applied directly. In this paper, we propose root-aligned SMILES (R-SMILES), which specifies a tight alignment between the product and reactant SMILES for more efficient synthesis prediction. Owing to the strict one-to-one mapping and reduced edit distance, the computational model is largely relieved from learning the complex syntax and can devote itself to learning the chemical knowledge of reactions. We compare the proposed R-SMILES with various state-of-the-art baselines and show that it significantly outperforms all of them, demonstrating the superiority of the proposed method.
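The root alignment is easy to illustrate with RDKit, assuming atom-map numbers link the two sides of a reaction; the helper below is ours, not the authors' code, and the authors' alignment procedure has more detail:

```python
from rdkit import Chem

def rooted_smiles(smiles, atom_map):
    """Emit a (non-canonical) SMILES rooted at the atom carrying `atom_map`."""
    mol = Chem.MolFromSmiles(smiles)
    root = next(a.GetIdx() for a in mol.GetAtoms()
                if a.GetAtomMapNum() == atom_map)
    return Chem.MolToSmiles(mol, rootedAtAtom=root, canonical=False)

# Rooting both sides at the same mapped atom makes the two strings start
# identically, shrinking the edit distance the model must bridge.
product  = rooted_smiles("[CH3:1][C:2](=[O:3])[OH:4]", atom_map=1)
reactant = rooted_smiles("[CH3:1][C:2](=[O:3])[Cl:5]", atom_map=1)
```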
Online continual learning (OCL) aims to train neural networks incrementally from a non-stationary data stream with a single pass through the data. Rehearsal-based methods attempt to approximate the observed input distribution with a small memory and revisit those samples later to avoid forgetting. Despite their strong empirical performance, rehearsal methods still suffer from a poor approximation of past data's loss landscape with memory samples. This paper revisits rehearsal dynamics in the online setting. We provide theoretical insights on the inherent memory overfitting risk from the viewpoint of biased and dynamic empirical risk minimization, and examine the merits and limits of repeated rehearsal. Inspired by our analysis, a simple and intuitive baseline, repeated augmented rehearsal (RAR), is designed to address the underfitting-overfitting dilemma of online rehearsal. Surprisingly, across four rather different OCL benchmarks, this simple baseline outperforms vanilla rehearsal by 9%-17% and also significantly improves the state-of-the-art rehearsal-based methods MIR, ASER, and SCR. We also demonstrate that RAR successfully achieves an accurate approximation of past data's loss landscape and high-loss ridge aversion in its learning trajectory. Extensive ablation studies are conducted to study the interplay between repeated and augmented rehearsal, and reinforcement learning (RL) is applied to dynamically adjust the hyperparameters of RAR to balance the stability-plasticity trade-off online.
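The RAR recipe is short enough to sketch directly. A minimal version under our own assumptions about the interface (a replay buffer with `sample`/`update` methods and a stochastic `transform` for augmentation); hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def rar_step(model, optimizer, stream_x, stream_y, memory, transform,
             n_repeat=3):
    """One online step of repeated augmented rehearsal: the incoming batch
    is replayed n_repeat times, each time paired with a freshly augmented
    memory batch, before the buffer is updated."""
    for _ in range(n_repeat):
        mem_x, mem_y = memory.sample()               # hypothetical buffer
        x = torch.cat([stream_x, transform(mem_x)])  # new views every repeat
        y = torch.cat([stream_y, mem_y])
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    memory.update(stream_x, stream_y)                # e.g. reservoir sampling
```

The augmentation inside the repeat loop is the key design choice: repetition alone would overfit the small memory, while fresh views of the same samples keep the extra updates informative.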
Recent progress in vision Transformers exhibits great success in various tasks, driven by a new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind vision Transformers, namely input-adaptive, long-range, and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present Recursive Gated Convolution ($\textit{g}^\textit{n}$Conv), which performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable: it is compatible with various variants of convolution and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $\textit{g}^\textit{n}$Conv can serve as a plug-and-play module to improve various vision Transformers and convolution-based models. Based on this operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection, and ADE20K semantic segmentation show that HorNet outperforms Swin Transformers by a significant margin with similar overall architecture and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Beyond its effectiveness in visual encoders, we also show that $\textit{g}^\textit{n}$Conv can be applied to task-specific decoders and consistently improve dense prediction performance with less computation. Our results demonstrate that $\textit{g}^\textit{n}$Conv can be a new basic module for visual modeling that effectively combines the merits of both vision Transformers and CNNs. Code is available at https://github.com/raoyongming/hornet
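A simplified sketch of the recursive gated convolution; the official HorNet implementation uses scaled channel widths per order and extra normalization, which are omitted here for clarity, so treat this as an illustration of the recursion rather than the released module:

```python
import torch
import torch.nn as nn

class GnConv(nn.Module):
    """Recursive gated convolution: gated element-wise products applied
    recursively around a depthwise convolution, realizing order-n spatial
    interactions. All orders share the same width in this sketch."""

    def __init__(self, dim, order=3):
        super().__init__()
        self.order = order
        self.proj_in = nn.Conv2d(dim, (order + 1) * dim, 1)
        self.dwconv = nn.Conv2d(order * dim, order * dim, 7,
                                padding=3, groups=order * dim)
        self.projs = nn.ModuleList(
            nn.Conv2d(dim, dim, 1) for _ in range(order - 1))
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        parts = self.proj_in(x).chunk(self.order + 1, dim=1)
        gates = self.dwconv(torch.cat(parts[1:], dim=1))
        gates = gates.chunk(self.order, dim=1)
        y = parts[0] * gates[0]            # first-order interaction
        for proj, g in zip(self.projs, gates[1:]):
            y = proj(y) * g                # recursive higher orders
        return self.proj_out(y)
```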
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small amount of trainable parameters in the input space (less than 1% of the model parameters) while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage costs.
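A minimal sketch of the idea, assuming a ViT backbone whose encoder accepts arbitrary token sequences and whose patch embedding is applied beforehand; both interface assumptions and all names are ours:

```python
import torch
import torch.nn as nn

class VisualPromptTuning(nn.Module):
    """A few learnable prompt tokens are prepended to the patch tokens of
    a frozen ViT; only the prompts and the head receive gradients."""

    def __init__(self, backbone, embed_dim, num_classes, num_prompts=10):
        super().__init__()
        self.backbone = backbone              # assumed pre-trained ViT encoder
        for p in self.backbone.parameters():
            p.requires_grad = False           # backbone stays frozen
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_tokens):
        # patch_tokens: (batch, num_patches, embed_dim), already embedded.
        prompts = self.prompts.expand(patch_tokens.shape[0], -1, -1)
        tokens = torch.cat([prompts, patch_tokens], dim=1)
        features = self.backbone(tokens)      # frozen encoder blocks
        return self.head(features[:, 0])
```

Because only `prompts` and `head` are trainable, the per-task checkpoint reduces to these few tensors, which is the storage saving the abstract refers to.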